sdr subspace
Sufficient dimension reduction for classification using principal optimal transport direction
Meng, Cheng, Yu, Jun, Zhang, Jingyi, Ma, Ping, Zhong, Wenxuan
Sufficient dimension reduction is used pervasively as a supervised dimension reduction approach. Most existing sufficient dimension reduction methods are developed for data with a continuous response and may perform unsatisfactorily when the response is categorical, especially binary. To address this issue, we propose a novel method for estimating the sufficient dimension reduction subspace (SDR subspace) using optimal transport. The proposed method, named principal optimal transport direction (POTD), estimates the basis of the SDR subspace using the principal directions of the optimal transport coupling between the data from different response categories. The proposed method also reveals the relationship among three seemingly unrelated topics: sufficient dimension reduction, support vector machines, and optimal transport. We study the asymptotic properties of POTD and show that, when the class labels contain no error, POTD estimates the SDR subspace exclusively. Empirical studies show that POTD outperforms most state-of-the-art linear dimension reduction methods.
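The coupling-then-principal-directions idea can be sketched in a few lines for a binary response. This is an illustrative assumption-laden sketch, not the authors' implementation: `potd_sketch` is a hypothetical helper, the two classes are assumed to have equal sizes so the optimal transport plan with uniform weights reduces to an assignment problem solvable by `scipy.optimize.linear_sum_assignment`.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def potd_sketch(X0, X1, d=1):
    """Illustrative POTD-style estimate of a d-dimensional SDR basis.

    X0, X1: samples from the two response categories (equal sizes assumed,
    so the uniform-weight OT coupling reduces to an assignment problem).
    """
    # squared-Euclidean cost matrix between the two class samples
    C = ((X0[:, None, :] - X1[None, :, :]) ** 2).sum(axis=-1)
    rows, cols = linear_sum_assignment(C)  # exact OT plan for uniform weights
    Z = X0[rows] - X1[cols]                # transport displacement vectors
    # leading right singular vectors = principal directions of the displacements
    _, _, Vt = np.linalg.svd(Z, full_matrices=False)
    return Vt[:d].T                        # columns span the estimated subspace
```

On a toy binary model whose classes differ only along one direction, the leading principal direction of the displacement vectors recovers that direction.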
Learning Fair Representations for Kernel Models
Tan, Zilong, Yeom, Samuel, Fredrikson, Matt, Talwalkar, Ameet
Fairness has emerged as a key issue in machine learning as it is increasingly used in areas like hiring [Dastin, 2018], healthcare [Gupta and Mohammad, 2017], and criminal justice [Equivant, 2019]. In particular, models' predictions should not lead to decisions that discriminate on the basis of a legally protected attribute, such as race or gender. Among the proposals to address this issue, a growing body of work focuses on learning fair representations of data for downstream modeling [Calmon et al., 2017, del Barrio et al., 2018, Feldman et al., 2015, Johndrow and Lum, 2019, Kamiran and Calders, 2012]. Most of these approaches are model-agnostic, which provides flexibility when working with the learned representations, but comes at the cost of potentially suboptimal results in terms of both fairness and accuracy. In this work, we present a new approach for fair representation learning that takes into account the target hypothesis class of models that will be learned from the representation. Specifically, we show how to leverage information about the reproducing kernel Hilbert space (RKHS) to learn a fair representation for kernel-based models with provable fairness and accuracy guarantees. Our approach builds on the classic Sufficient Dimension Reduction (SDR) framework [Li, 1991, Cook and Weisberg, 1991, Cook, 1998, Fukumizu et al., 2004, 2009, Wu et al., 2009, Cook and Forzani, 2009], which is used to compute a low-dimensional projection of the feature vector X that captures all information related to the response Y. Our key insight is that we can instead perform SDR with respect to the protected attributes S, and then take the orthogonal complement of the resulting projection to obtain a fair subspace of the RKHS that captures information in X unrelated to S. We show that functions in the fair subspace will be independent of S under mild conditions (§2.2), and we leverage this fact to prove that our approach
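A minimal linear analogue of the key insight can be sketched as follows. This is not the paper's kernel construction: `fair_projection` is a hypothetical helper that estimates the directions of X most predictive of the protected attribute S with a simplified SIR-style slice-mean step (no whitening), then projects X onto their orthogonal complement.

```python
import numpy as np

def fair_projection(X, S, d=1):
    """Linear sketch: remove the d directions of X most associated with S.

    Simplified, non-kernel analogue of the paper's idea: SDR with respect
    to the protected attribute S, then the orthogonal complement.
    """
    Xc = X - X.mean(axis=0)
    # slice means: average of X within each level of the protected attribute
    M = np.stack([Xc[S == s].mean(axis=0) for s in np.unique(S)])
    _, _, Vt = np.linalg.svd(M, full_matrices=False)
    B = Vt[:d].T                          # basis of the "unfair" directions
    P = np.eye(X.shape[1]) - B @ B.T      # projector onto their complement
    return Xc @ P, B
```

When S is determined by a single coordinate of X, the estimated unfair direction aligns with that coordinate axis, and the projected representation carries approximately no linear information about S.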
Subspace-Induced Gaussian Processes
We present a new Gaussian process (GP) regression model in which the covariance kernel is indexed, or parameterized, by a sufficient dimension reduction subspace of a reproducing kernel Hilbert space. The covariance kernel is low-rank while capturing the statistical dependence of the response on the covariates; this affords a significant improvement in computational efficiency as well as a potential reduction in the variance of predictions. We develop a fast expectation-maximization algorithm for estimating the parameters of the subspace-induced Gaussian process (SIGP). Extensive results on real data show that SIGP can outperform the standard full GP even with a low-rank ($m \leq 3$) inducing subspace.
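The computational benefit of a rank-$m$ kernel can be seen in a stripped-down sketch. This is not SIGP itself (which works in an RKHS and estimates the subspace by EM): `low_rank_gp_mean` is a hypothetical helper that assumes the $m$-dimensional subspace basis `B` is already known and uses a linear kernel on the projected features, so the posterior-mean solve is an m-by-m system rather than an n-by-n one.

```python
import numpy as np

def low_rank_gp_mean(X, y, X_new, B, noise=0.1):
    """Posterior mean of a GP with low-rank kernel K = (XB)(XB)^T + noise*I.

    Solving the m x m system (noise*I + Phi^T Phi) costs O(n m^2), versus
    the O(n^3) Cholesky factorization required by a full-rank GP kernel.
    """
    Phi, Phi_new = X @ B, X_new @ B        # n x m and n* x m feature maps
    m = Phi.shape[1]
    A = noise * np.eye(m) + Phi.T @ Phi    # Woodbury inner matrix
    w = np.linalg.solve(A, Phi.T @ y)
    return Phi_new @ w
```

When the response depends on the covariates only through the subspace, the rank-$m$ model fits it with essentially no loss relative to a full model.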